# Wikipedia Pre-training
## Bert Base Japanese Wikipedia Ud Head
A BERT model built for Japanese dependency parsing that detects the head word within each long-unit word, implemented in a question-answering format.
Developer: KoichiYasuoka · Tags: Sequence Labeling, Transformers, Japanese · Downloads: 474 · Likes: 1
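Since head detection is exposed through a question-answering head, it can in principle be queried with the standard `transformers` QA classes: the word of interest goes in as the question, the sentence as the context, and the predicted answer span is the head word. The sketch below is a minimal illustration under that assumption; the Hub ID `KoichiYasuoka/bert-base-japanese-wikipedia-ud-head` and the example sentence are assumptions, so check the model card for the exact usage.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Assumed Hub ID; verify it on the developer's model card.
model_id = "KoichiYasuoka/bert-base-japanese-wikipedia-ud-head"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# The "question" is the word whose head we want, the "context" is the sentence.
question = "国語"
context = "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The highest-scoring answer span is taken as the head word.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```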
## Mluke Large Lite
mLUKE is the multilingual extension of LUKE, supporting named entity recognition, relation classification, and question answering in 24 languages.
Developer: studio-ousia · License: Apache-2.0 · Tags: Large Language Model, Transformers, Supports Multiple Languages · Downloads: 65 · Likes: 2
## Bert Small Japanese
A small BERT model pre-trained on Japanese Wikipedia and optimized for financial text mining.
Developer: izumi-lab · Tags: Large Language Model, Transformers, Japanese · Downloads: 358 · Likes: 5
## Bert Base Japanese Char Whole Word Masking
A BERT model pre-trained on Japanese text with character-level tokenization and whole word masking, suitable for Japanese natural language processing tasks.
Developer: tohoku-nlp · Tags: Large Language Model, Japanese · Downloads: 1,724 · Likes: 4
## Bert Base Japanese Char V2
A BERT model pre-trained with character-level tokenization and whole word masking on a snapshot of Japanese Wikipedia taken on August 31, 2020.
Developer: tohoku-nlp · Tags: Large Language Model, Japanese · Downloads: 134.28k · Likes: 6
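With character-level tokenization, a `[MASK]` token stands for exactly one character, which the fill-mask pipeline makes easy to see. A minimal sketch, assuming the Hub ID `tohoku-nlp/bert-base-japanese-char-v2` (previously published under the `cl-tohoku` namespace) and that the tokenizer's MeCab pre-segmentation step needs the `fugashi` and `unidic-lite` packages:

```python
from transformers import pipeline

# Assumed Hub ID; the tokenizer runs MeCab word segmentation before splitting
# into characters, so fugashi and unidic-lite are expected to be installed.
fill_mask = pipeline("fill-mask", model="tohoku-nlp/bert-base-japanese-char-v2")

# [MASK] corresponds to a single character here.
for prediction in fill_mask("東京は日本の首[MASK]です。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```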
## Bertinho Gl Base Cased
A pre-trained cased BERT model for Galician (12 layers), trained on Wikipedia data.
Developer: dvilares · Tags: Large Language Model, Other · Downloads: 218 · Likes: 3
## Bert Base Japanese Basic Char V2
A Japanese BERT model pre-trained with character-level tokenization and whole word masking that does not depend on the `fugashi` or `unidic_lite` packages.
Developer: hiroshi-matsuda-rit · Tags: Large Language Model, Transformers, Japanese · Downloads: 14 · Likes: 0
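Because no MeCab pre-segmentation step is involved, the model should load with the stock `AutoTokenizer`/`AutoModelForMaskedLM` classes and no extra Japanese tooling. A minimal sketch, assuming the Hub ID `hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2`:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed Hub ID, shown only to illustrate the plain loading path;
# no fugashi or unidic_lite installation should be required.
model_id = "hiroshi-matsuda-rit/bert-base-japanese-basic-char-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

inputs = tokenizer("日本語の文を文字単位で扱います。", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (batch, sequence_length, vocab_size)
```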
## Distilbert Base Tr Cased
A lightweight version of distilbert-base-multilingual-cased customized for Turkish while maintaining the original model's accuracy.
Developer: Geotrend · License: Apache-2.0 · Tags: Large Language Model, Transformers, Other · Downloads: 21 · Likes: 0
## Bert Base Fr Cased
A compact French BERT model derived from bert-base-multilingual-cased that maintains the original accuracy.
Developer: Geotrend · License: Apache-2.0 · Tags: Large Language Model, French · Downloads: 18 · Likes: 1
## Xlm Mlm 100 1280
A cross-lingual language model (XLM) pre-trained on Wikipedia text in 100 languages with the masked language modeling objective.
Developer: FacebookAI · Tags: Large Language Model, Transformers, Supports Multiple Languages · Downloads: 296 · Likes: 4
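As a plain pre-trained encoder it can be used for multilingual feature extraction out of the box. A minimal sketch, assuming the Hub ID `FacebookAI/xlm-mlm-100-1280` (historically just `xlm-mlm-100-1280`) and that the XLM tokenizer's Moses preprocessing needs the `sacremoses` package:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed Hub ID; the legacy short name may also resolve on the Hub.
model_id = "FacebookAI/xlm-mlm-100-1280"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # may need sacremoses
model = AutoModel.from_pretrained(model_id)

# The same encoder handles any of the 100 pre-training languages.
sentences = ["Hello, world!", "Bonjour le monde !", "Hallo Welt!"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, tokens, hidden_size)
print(hidden.shape)
```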